Patent abstract:
A method for correcting spurious pixels of an infrared-sensitive image capture device, the method comprising: receiving a first input image (RAW) and correcting the first input image by applying gain and offset values; detecting in the first corrected input image at least one spurious pixel, and adding said at least one spurious pixel to a list (LSPUR) of spurious pixels; receiving a second input image (RAW) and correcting the second input image by applying the gain and offset values; and calculating gain and offset correction values (sOff, sGain) for said at least one spurious pixel based on the corrected first and second input images.
Publication number: FR3038194A1
Application number: FR1555963
Filing date: 2015-06-26
Publication date: 2016-12-30
Inventors: Amaury Saragaglia; Alain Durand
Applicant: Ulis SAS;
IPC main classification:
Patent description:

[0001] CORRECTION OF PIXEL PARASITES IN AN INFRARED IMAGE SENSOR. Field. The present description relates to the field of infrared image capture devices, and in particular to a method and a device for correcting spurious (parasitic) pixels in an image captured by an array of pixels sensitive to infrared light. BACKGROUND OF THE INVENTION. Infrared (IR) image capture devices, such as micro-bolometers or cooled IR image capture devices, comprise an array of infrared-sensitive detectors forming a pixel array. To correct spatial non-uniformities between the pixels of such a pixel array, an offset and gain correction is generally applied to each pixel signal (or "pixel value") of a captured image before it is displayed. The offset and gain values are generated during a preliminary factory calibration phase of the device, using uniform emissive sources (black bodies) at controlled temperatures, and are stored by the image capture device. Such spatial non-uniformities vary not only in time but also as a function of the temperature of the optical, mechanical and electronic components of the image capture device, and an internal mechanical shutter is therefore often used in the image capture device to facilitate image correction. This involves periodically capturing an image while the shutter is closed, in order to obtain a reference image of a relatively uniform scene that can then be used for calibration. It is common that, as a result of the manufacturing process of such infrared image capture devices, one or more pixels of the pixel array are declared non-functional at the end of the manufacturer's initial calibration phase. Such pixels are generally known in the art as "bad pixels", and they are identified in an operability map stored by the image capture device. The pixel values generated by the bad pixels cannot usually be considered reliable, and their values are therefore replaced by a value generated on the basis of neighboring pixels in the image. Furthermore, it has been found that, during the lifetime of such image capture devices, the signal behavior of one or more initially functional pixels may no longer be acceptably described by their initial calibration parameters. This may be due to various physical changes, or even to mechanical damage caused by tiny moving internal particles left or released in the sensor housing, for example. Such pixels will be called here spurious (parasitic) pixels. They are not listed in the initial operability map, and they can degrade the quality of the image. In the case of image capture devices equipped with a shutter, the French patent application published under the number FR3009388 describes a method for identifying such spurious pixels during any shutter closure period, providing a means of recurrently updating the operability map. However, the use of a shutter has several disadvantages, such as the additional weight, cost and fragility of this component. In addition, in some applications the use of a shutter is unacceptable because of the time lost while the shutter is closed and calibration takes place; during this calibration period, no image of the scene can be captured.
[0002] In an image capture device without a shutter, there is a technical difficulty in identifying such spurious pixels in the image scene, particularly if the pixel values lie in a textured area of a captured image.
[0003] Assuming that the spurious pixels can be identified, such spurious pixels could simply be added to the list of bad pixels. However, if the image capture device receives, for example, multiple shocks during its lifetime, to such an extent that the density of spurious pixels in the image can no longer be considered negligible, image degradation could occur. There is therefore a need in the art, particularly for shutterless infrared image capture, for a device and a method for detecting spurious pixels, at least in order to update the operability map, but also to recalibrate those particular spurious pixels which have become badly calibrated. SUMMARY. An object of embodiments of the present disclosure is to at least partially address one or more needs of the prior art. In one aspect, there is provided a method of correcting spurious pixels of an infrared-sensitive image capture device, the method comprising: receiving, by a processing device of the image capture device, a first input image captured by the pixel array, and correcting the first input image by applying gain and offset values to pixel values in the first input image; detecting in the first corrected input image at least one spurious pixel, and adding said at least one spurious pixel to a spurious pixel list; receiving, by the processing device, a second input image captured by the pixel array and correcting the second input image by applying the gain and offset values to pixel values in the second input image; and calculating gain and offset correction values for said at least one spurious pixel based on the corrected first and second input images. For example, correcting the first and second input images includes correcting pixel values for pixels at the same locations in the first and second input images. According to one embodiment, the method further comprises validating the gain and offset correction values by applying them to correct the values of said at least one spurious pixel in a third image captured by the pixel array and detecting whether said at least one spurious pixel is still detected as a spurious pixel in the third image.
[0004] According to one embodiment, the third input image is captured at a temperature of the pixel array different from that of each of the first and second input images. According to one embodiment, the method further comprises, before calculating the gain and offset correction values, adding said at least one detected spurious pixel to a list of bad pixels, and removing said at least one detected spurious pixel from the bad pixel list if the gain and offset correction values are validated during the validation step.
[0005] According to one embodiment, the array of pixels comprises columns of pixels, each column being associated with a corresponding reference pixel, and the correction of the first and second input images comprises: determining, on the basis of the input image and of a column component vector representing a column deviation introduced by the reference pixels of the pixel array, a first scale factor by estimating a level of the column deviation present in the input image; generating column offset values based on the product of the first scale factor by the values of the column component vector; determining, on the basis of the input image and of a 2D dispersion matrix representing the 2D dispersion introduced by the pixel array, a second scale factor by estimating a level of the 2D dispersion present in the input image; generating pixel offset values based on the product of the second scale factor by the values of the 2D dispersion matrix; and generating the corrected image by applying the column and pixel offset values.
[0006] According to one embodiment, the corrected image is generated on the basis of the equation: CORR(x, y) = GAIN(x, y) × (RAW(x, y) − α·OFFCOL(x, y) − β·OFFDISP(x, y) − γ), where RAW is the input image, α and β are scale factors, γ is a gain correction value, GAIN(x, y) is a gain value, OFFCOL(x, y) and OFFDISP(x, y) are offset values, OFFCOL being a matrix comprising, in each of its rows, the column vector VCOL, and OFFDISP being the reference dispersion matrix. According to one embodiment, calculating the gain and offset correction values for said at least one spurious pixel on the basis of the corrected first and second input images comprises: estimating, based on neighboring pixels in the first input image, a first expected pixel value of each of said at least one spurious pixel; estimating, based on neighboring pixels in the second input image, a second expected pixel value of each of said at least one spurious pixel; and calculating the gain and offset correction values based on the estimated first and second expected pixel values.
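Purely as an illustration of the correction summarized in the two preceding paragraphs, a minimal NumPy sketch could look as follows; the choice of high-pass filter, the array and function names, and the omission of the residual term are assumptions made for this example (the estimation of the scale factors α and β follows equations 2 and 3 given later in the description) and do not reflect an implementation imposed by the present description.

```python
import numpy as np

def high_pass_rows(img):
    # Row-wise high-pass: difference from a 3-tap running mean (an assumption
    # made for this sketch; any row-wise high-pass filter T() could be used).
    kernel = np.ones(3) / 3.0
    smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return img - smooth

def correct_image(raw, gain, off_col, off_disp, gamma=0.0):
    """CORR = GAIN * (RAW - alpha*OFFCOL - beta*OFFDISP - gamma)."""
    # alpha: level of the column deviation present in RAW, obtained by matching
    # the column means of the filtered image to the filtered column vector.
    t_raw = high_pass_rows(raw)
    t_col = high_pass_rows(off_col)[0]      # OFFCOL repeats VCOL on every row
    col_means = t_raw.mean(axis=0)
    alpha = np.sum(col_means * t_col) / np.sum(t_col * t_col)

    # beta: level of the 2D dispersion present in RAW, obtained from the
    # horizontal and vertical gradients of RAW and OFFDISP.
    gx_r, gy_r = np.gradient(raw, axis=1), np.gradient(raw, axis=0)
    gx_d, gy_d = np.gradient(off_disp, axis=1), np.gradient(off_disp, axis=0)
    beta = np.sum(gx_r * gx_d + gy_r * gy_d) / np.sum(gx_d ** 2 + gy_d ** 2)

    return gain * (raw - alpha * off_col - beta * off_disp - gamma)
```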
[0007] According to one embodiment, the detection of said at least one spurious pixel comprises: calculating a score for a plurality of target pixels comprising at least some of the pixels of the first input image, the score for each target pixel being generated on the basis of k connected neighboring pixels of the input image in a window of H by H pixels around the target pixel, H being an odd integer equal to or greater than 3, and k being an integer between 2 and 5, wherein each of the connected neighboring pixels shares a border or corner with at least one of the connected neighboring pixels and/or with the target pixel, and wherein at least one of the connected neighboring pixels shares a border or corner with the target pixel; and detecting that at least one of the pixels is a spurious pixel based on the calculated scores. According to one embodiment, the detection of said at least one spurious pixel comprises comparing at least some of the scores to a threshold value. According to one embodiment, comparing at least some of the scores to a threshold value involves comparing a subset of the scores to the threshold value, the subset including a plurality of the highest scores, and the threshold value is calculated on the basis of the following equation: thrSPUR = Q3 + xEI × (Q3 − Q1), where xEI is a parameter equal to at least 1.0 and Q1 and Q3 are respectively the first and third quartiles of the subset. According to one embodiment, said at least some scores are selected by applying another threshold to the calculated scores. According to one embodiment, the other threshold is calculated on the basis of an assumption that the pixel values in the image have a probability distribution based on the Laplace distribution.
[0008] According to one embodiment, the other threshold is calculated on the basis of the following equation: throutlier = ln(4)/λ + 1.5 × ln(3)/λ, where λ is an estimate of the parameter of the exponential distribution f(x) = λ·e^(−λx) corresponding to the absolute values of the calculated scores. In another aspect, there is provided a computer readable storage medium storing instructions for carrying out the aforementioned method when performed by a processing device.
[0009] In another aspect, there is provided an image processing device comprising: a memory storing offset and gain values and a list of spurious pixels; and a processing device adapted to: receive a first input image captured by a pixel array of an infrared-sensitive image capture device, and correct the first input image by applying the gain and offset values to pixel values in the first input image; detect in the first corrected input image at least one spurious pixel, and add said at least one spurious pixel to the spurious pixel list; receive a second input image captured by the pixel array and correct the second input image by applying the gain and offset values to pixel values in the second input image; and calculate gain and offset correction values for said at least one spurious pixel based on the corrected first and second input images. According to one embodiment, the processing device is further adapted to validate the gain and offset correction values by applying them to correct the values of said at least one spurious pixel in a third input image captured by the pixel array and detecting whether said at least one spurious pixel is still detected as a spurious pixel in the third image. Brief Description of the Drawings. The foregoing and other features and advantages will become apparent from the following detailed description of embodiments, given by way of illustration and not of limitation, with reference to the accompanying drawings, in which: Figure 1 schematically illustrates an image capture device according to an exemplary embodiment; Figure 2 schematically illustrates an image processing block of the image capture device of Figure 1 in more detail according to an exemplary embodiment; Figure 3 is a flowchart showing operations in a spurious pixel detection and correction method according to an exemplary embodiment of the present disclosure; Figure 4 is a flowchart illustrating operations in a method of generating offset and gain correction values according to an exemplary embodiment of the present disclosure; Figure 5 is a flowchart illustrating operations in a method of validating offset and gain correction values according to an exemplary embodiment; Figure 6 is a flowchart illustrating operations in an identification method according to an exemplary embodiment; Figure 7 is a flowchart illustrating operations in a method of generating pixel scores according to an exemplary embodiment; Figure 8A illustrates an example of selection of connected neighboring pixels according to an exemplary embodiment of the present description; Figure 8B illustrates examples of connected and unconnected neighboring pixels according to an exemplary embodiment; and Figure 8C illustrates an example of an edge and of a spurious pixel according to an exemplary embodiment. DETAILED DESCRIPTION. While some of the embodiments in the following description are described in connection with a micro-bolometer pixel array, it will be apparent to those skilled in the art that the methods described herein could also be applied to other types of IR image capture devices, including cooled devices. Further, although embodiments are described herein in connection with a shutterless IR image capture device, these embodiments could also be applied to an IR image capture device including a mechanical shutter, and to images captured by such a device.
[0010] Figure 1 illustrates an IR image capture device 100 comprising a pixel array 102 sensitive to IR light. For example, in some embodiments, the pixel array is sensitive to long-wave IR light, such as light having a wavelength of 7 to 13 μm. The device 100 is for example capable of capturing single images and also sequences of images constituting video. The device 100 is for example a shutterless device. For ease of illustration, Figure 1 shows a pixel array 102 of only 144 pixels 104, arranged in 12 rows and 12 columns. In alternative embodiments, the pixel array 102 could include any number of rows and columns of pixels. Typically, the array comprises for example 640 by 480, or 1024 by 768, pixels. Each column of pixels of the array 102 is associated with a corresponding reference structure 106. Although it is not functionally a picture element, this structure will be called here "reference pixel" by structural analogy with the image-forming (or active) pixels 104. In addition, an output block (OUTPUT) 108 is coupled to each column of the pixel array 102 and to each of the reference pixels 106, and provides a raw image RAW. For example, a control circuit (CTRL) 110 provides control signals to the pixel array, to the reference pixels 106 and to the output block 108. The raw image RAW is for example supplied to an image processing block (IMAGE PROCESSING) 112, which applies offsets and gains to the pixels of the image to produce a corrected image CORR. Each of the pixels 104 comprises for example a bolometer. Bolometers are well known in the art, and comprise for example a membrane suspended over a substrate, comprising a layer of infrared-absorbing material and having the property that its resistance is modified by the temperature rise of the membrane associated with the presence of IR radiation.
[0011] The reference pixel 106 associated with each column comprises, for example, a blind bolometer, which for example has a structure similar to that of the active bolometers of the pixels 104 of the array, but which is rendered insensitive to radiation originating from the image scene, for example by a screen consisting of a reflective barrier and/or by a heat sink obtained by design, for example by ensuring a high thermal conductance to the substrate, the bolometer being formed for example in direct contact with the substrate. During a read operation of the pixel array 102, the rows of pixels are for example read one by one. An example of a bolometer-type pixel array is, for example, described in more detail in the United States patent US 7700919, assigned to the Applicant.
[0012] Figure 2 illustrates the image processing block 112 of Figure 1 in more detail according to an exemplary embodiment. The functions of the image processing block 112 are for example implemented in software, and the image processing block 112 comprises a processing device (PROCESSING DEVICE) 202 comprising one or more processors operating under the control of instructions stored in an instruction memory (INSTR MEMORY) 204. In alternative embodiments, the functions of the image processing block 112 could be implemented partially by dedicated hardware. In such a case, the processing device 202 comprises for example an ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array), and the instruction memory 204 can be omitted.
[0013] The processing device 202 receives the raw input image RAW and generates the corrected image CORR, which is for example supplied to a display (not shown) of the image capture device. The processing device 202 is also coupled to a data memory (MEMORY) 206 storing offset values (OFFSET) 208, gain values (GAIN) 210, a list (LSPUR) 212 of identified spurious pixels and a list (L_BADPIXEL) of bad pixels. The offset values are for example represented by a vector VCOL, representing a structural column deviation, and by a matrix OFFDISP, representing a non-column 2D structural dispersion introduced by the pixel array 102. The column deviation results for example mainly from the use of the reference pixel 106 in each column, the row of column reference pixels generally not being perfectly uniform. The non-column 2D dispersion results for example mainly from local physical and/or structural differences between the active bolometers of the pixel array, resulting for example from technological dispersions in the manufacturing process.
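Purely by way of illustration, the calibration data held in the memory 206 could be grouped as in the following sketch; the field and type names are assumptions made for the example and are not imposed by the present description.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
import numpy as np

@dataclass
class SpuriousPixelEntry:
    # Data accumulated for one pixel of the LSPUR list 212.
    detection_count: int = 0            # used to derive the detection frequency FREQ
    s_off: Optional[float] = None       # offset correction value sOff, once computed
    s_gain: Optional[float] = None      # gain correction value sGain, once computed

@dataclass
class CalibrationData:
    v_col: np.ndarray                   # column component vector VCOL (one value per column)
    off_disp: np.ndarray                # 2D dispersion matrix OFFDISP (one value per pixel)
    gain: np.ndarray                    # gain matrix GAIN 210 (one value per pixel)
    lspur: Dict[Tuple[int, int], SpuriousPixelEntry] = field(default_factory=dict)
    bad_pixels: List[Tuple[int, int]] = field(default_factory=list)   # L_BADPIXEL
```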
[0014] The generation of the vector VCOL and of the matrix OFFDISP, and the correction of pixel values on the basis of this vector and matrix, are described in more detail in the US patent application 14/695,539 filed on April 24, 2015, in the French patent application FR14/53917 filed on April 30, 2014, and in the Japanese patent application JP2015-093484 filed on April 30, 2015, all in the name of the Applicant. A method as described in these applications will now be described. It will be assumed that the raw image RAW has been captured by the pixel array 102 of Figure 1, and that the pixel array is of the type in which each column of the array is associated with a corresponding reference pixel 106. For example, a corrected image CORR is generated on the basis of the raw image RAW by applying the following equation: CORR(x, y) = GAIN(x, y) × (RAW(x, y) − α·OFFCOL(x, y) − β·OFFDISP(x, y) − γ) − res (Equation 1), where x, y are the coordinates of the pixel 104, α and β are scale factors, γ is a gain correction value, GAIN(x, y) is a gain value, OFFCOL(x, y) and OFFDISP(x, y) are offset values, OFFCOL being a matrix comprising, in each of its rows, the column vector VCOL, OFFDISP being the reference dispersion matrix, and res is a residual correction which is for example used in some embodiments to correct the remaining column and/or dispersion residues in the image. The scale factor α is for example determined on the basis of the following equation: α = Σx((1/m)·Σy T(RAW(x, y)) × T(VCOL(x))) / Σx(T(VCOL(x)) × T(VCOL(x))) (Equation 2), where T() is a high-pass filter applied to the column vector VCOL and to the rows of the input image RAW, and m is the number of rows in the image. In other words, the determination of the scale factor α involves for example applying the high-pass filter to the raw image along its rows and also to the reference column vector, determining the column averages of the filtered image, giving a vector of the same size as the reference column vector, and then determining the scale factor as the one minimizing the differences between the two vectors, i.e. between the column means of the filtered image and the filtered column vector. The scale factor β is for example determined on the basis of the following equation: β = Σ(∇x RAW·∇x OFFDISP + ∇y RAW·∇y OFFDISP) / Σ((∇x OFFDISP)² + (∇y OFFDISP)²) (Equation 3), where ∇x is the pixel gradient value between adjacent pixels in the horizontal direction of the image, in other words along each row, and ∇y is the pixel gradient value between adjacent pixels in the vertical direction of the image, in other words along each column. Although in the following the gain and offset correction is described as being based on the aforementioned equations 1 and 3, in alternative embodiments other correction methods could be used. Figure 3 is a flowchart showing operations in a method for detecting and correcting spurious pixels according to an exemplary embodiment. This method is for example implemented by the circuit 112 of Figure 2 each time a new image is captured. Spurious pixels are pixels whose offset and gain have deviated from their originally calibrated values, for example as a result of mechanical shock or of damage due to minute moving internal particles. A spurious pixel may correspond to an additional "bad pixel", which for example has been destroyed and is therefore unable to provide a usable signal in relation to the scene.
However, the present inventors have found that a spurious pixel may often still be able to provide a usable signal in relation to the scene, but that its value has become stably shifted, in terms of offset and gain, with respect to the originally calibrated value. In an operation 301, spurious pixels are detected in a CORR image, which corresponds to a raw image RAW corrected on the basis of the gain and offset values. The detected spurious pixels form the list LSPUR. As will be described in more detail below, in one embodiment the spurious pixels are detected on the basis of a calculation of the distance (in terms of pixel values) relative to the connected neighbors of each pixel in the image. However, in alternative embodiments, other techniques could be applied to identify spurious pixels. For example, one or more spurious pixels could be manually identified by a user. In addition, in some embodiments, the CORR image could be of a uniform scene, for example if it is captured with a closed shutter of an image capture device, thereby facilitating the identification of spurious pixels. In a next operation 302, offset and gain correction values are calculated for each identified spurious pixel. In other words, for each identified spurious pixel, correction values are calculated to correct the stored current offset and gain values. This calculation is for example based on at least two captured images.
[0015] In a subsequent operation 303, a validation of these offset and gain correction values is for example performed for each spurious pixel. The validation is for example carried out at a focal plane temperature different from the temperature at the moment when the offset and gain correction values were calculated, in order to verify that the calculated corrections provide an appropriate correction for these pixels when the temperature of the focal plane changes. The temperature of the focal plane corresponds to the temperature of the pixel array. In other words, the inventors have noticed that at least a portion of the detected spurious pixels can still be corrected durably, even if their offsets and gains have shifted, and the quality of the offset and gain correction values calculated for these spurious pixels is for example verified by their stability once the temperature of the focal plane has changed. If, in the operation 303, the gain and offset correction values are validated, the gain and offset values 208, 210 for the spurious pixels are for example updated, in an operation 304, with the correction values calculated in the operation 302. Otherwise, if in the operation 303 the gain and offset correction values are not validated, the spurious pixels are added, in an operation 305, to a list of bad pixels. In other words, the spurious pixels for which the corrections are unstable with the focal plane temperature changes are classified as additional bad pixels. The pixels on the list of bad pixels have for example their pixel value replaced by a pixel estimate based on one or more of their neighboring pixels. In an alternative embodiment, all the spurious pixels identified in the operation 301 are systematically added to the list of bad pixels, and are then removed from this list only if the correction is validated in the operation 303. In certain embodiments, rather than attempting to correct the pixels identified as spurious, the operations 302, 303 and 304 could be omitted, and the method could systematically involve the addition of all detected spurious pixels to the list of bad pixels in the operation 305. Such an approach would save the processing cost associated with the operations 302 and 303. In yet another variant, some pixels could be initially added to the bad pixel list, and if the number of bad pixels exceeds a threshold level, one or more previously identified spurious pixels, or one or more newly identified spurious pixels, could for example be processed by the correction operations 302 to 304 rather than being added to the bad pixel list. Figure 4 is a flowchart illustrating an example of operations for calculating offset and gain correction values in the operation 302 of Figure 3. In an operation 401, an image is for example captured and the pixel values of at least some of the pixels are corrected using the offset and gain values 208, 210, for example on the basis of the aforementioned equation 1. The inputs of the method are, for example, the raw captured image RAW, the offset values OFFCOL and OFFDISP, the gain values GAIN, and the terms α, β and γ used to correct the image according to the aforementioned equation 1. It will also be assumed that the list LSPUR of spurious pixels has been generated in the operation 301 of Figure 3. In an operation 402, the pixels pi of the list LSPUR for which the detection frequency FREQ exceeds a threshold level FREQMIN are selected, and the subsequent operations of the method are performed only on these pixels.
This operation means for example that the correction algorithm is applied only to pixels that are repeatedly detected as spurious pixels. For example, each time a pixel pi is detected as a spurious pixel, the detection frequency FREQ is calculated as being equal to the number of times the pixel has been detected as spurious in the previous N frames, where N is for example between 2 and 20. If this frequency is greater than FREQMIN, equal for example to N/2, the pixel is selected. In some embodiments, this operation is omitted, and the subsequent operations of the method are applied to all the pixels of the list LSPUR. In an operation 403, an expected value PEXP of the pixel is calculated. For example, when a pixel has become spurious, its value PCORR, after the gain and offset correction, but which has been identified as aberrant, can be expressed as: PCORR = g × (PSPUR − α×oCOL − β×oDISP − γ) − res (Equation 4), where PSPUR is the pixel value of the spurious pixel in the matrix RAW, oCOL and oDISP are the values of the matrices OFFCOL and OFFDISP applied to the pixel, g is the value of the matrix GAIN applied to the pixel, and α, β, γ and res are the same as in the aforementioned equation 1. Assuming that this pixel can be appropriately corrected, there exist gain and offset correction values sGain and sOff such that: PEXP = (g − sGain) × (PSPUR − α×oCOL − β×(oDISP − sOff) − γ) − res (Equation 5), where PEXP is the expected pixel value, which is for example equal to or near the value that would have been obtained if the gain and offset values had been recalculated on the basis of one or more new images. Since there are two unknowns, in order to determine the values of sGain and sOff, two expected values are for example calculated, as will now be described. The expected value PEXP is for example calculated on the basis of the neighboring pixels. For example, an algorithm commonly used to correct bad pixels is applied, such as interpolation of pixel data, extrapolation, and/or a technique known in the art as "inpainting".
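As an illustration of this estimation, one simple possibility is to take a robust statistic of the valid neighbors of the spurious pixel in the corrected image; the use of the median over a 3 by 3 neighborhood below is an assumption made for the example, interpolation, extrapolation or inpainting being equally possible as indicated above.

```python
import numpy as np

def expected_value(corr, x, y, excluded):
    """Estimate the expected value PEXP of the pixel (x, y) of the corrected
    image CORR from its immediate neighbors, ignoring the positions listed
    in `excluded` (for example bad pixels and other spurious pixels)."""
    height, width = corr.shape
    values = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if (dx, dy) == (0, 0) or not (0 <= nx < width and 0 <= ny < height):
                continue
            if (nx, ny) in excluded:
                continue
            values.append(corr[ny, nx])
    return float(np.median(values)) if values else float(corr[y, x])
```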
[0016] In an operation 404, it is checked whether, in addition to the new value PEXP, a previous value PEXP1 or PEXP2 is also available for the pixel, in other words whether the set {PEXP1, PEXP2}pi is empty or not. If a previous value PEXP1 already exists, this implies that it has been determined for a previous image in which the pixel values of at least some of the pixels were corrected using the same offset and gain values 208, 210 as those applied to the current image. In other words, the locations of the corrected pixels in the current and previous images are for example the same. If the set {PEXP1, PEXP2}pi is empty and there are no previous values, in an operation 405 the value of PEXP is stored as PEXP1, the scale factors α and β and the gain correction value γ applied to the pixel are stored as values α1, β1, γ1, and the pixel value PSPUR is also stored as a value PSPUR1. In an operation 406, the next pixel of the list LSPUR for which the detection frequency FREQ is greater than FREQMIN is for example selected, and the method returns to the operation 403. If, when the operation 404 is performed, a value of PEXP1 already exists for the pixel, it is for example determined, in a subsequent operation 407, whether the absolute difference between the new value PEXP and the previous value PEXP1 is greater than a threshold value thrdiffmin. If not, the method returns to the operation 406. Otherwise, if the pixel values are sufficiently spaced, the next operation is the operation 408. In the operation 408, the new value PEXP is stored as PEXP2, the scale factors α and β and the gain correction value γ applied to the pixel are stored as values α2, β2, γ2, and the pixel value PSPUR is also stored as value PSPUR2. In an operation 409, offset and gain correction values sOff and sGain are for example calculated on the basis of the estimates PEXP1 and PEXP2. For example, the value sOff is calculated on the basis of the following equation: sOff = (PEXP2 × (PSPUR1 − δ1) − PEXP1 × (PSPUR2 − δ2)) / (β2 × PEXP1 − β1 × PEXP2) (Equation 7), where δi = αi × oCOL + βi × oDISP + γi. The value of sGain is for example calculated on the basis of the following equation: sGain = g − PEXP1 / (PSPUR1 − δ1 + β1 × sOff) (Equation 8). Of course, it would be possible to first calculate a value of sGain and then substitute this value in order to calculate the value sOff. In a following operation 410, the gain and offset correction values sGain and sOff are for example stored in the list LSPUR in association with the pixel PSPUR. The method then returns, for example, to the operation 406 until all the pixels of the list LSPUR for which the detection frequency FREQ is greater than FREQMIN have been processed. The method is then repeated, for example, when a next image is captured.
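A direct transcription of equations 7 and 8 for a single spurious pixel could look as follows; the argument names are assumptions made for the example, the residual term res is neglected as in equations 7 and 8, and no protection is shown against a zero denominator (which the threshold thrdiffmin of the operation 407 helps to avoid).

```python
def correction_values(p_exp1, p_spur1, a1, b1, g1,
                      p_exp2, p_spur2, a2, b2, g2,
                      o_col, o_disp, gain):
    """Offset and gain correction values sOff and sGain (equations 7 and 8)
    for one spurious pixel, from the two stored expected values."""
    # delta_i = alpha_i * oCOL + beta_i * oDISP + gamma_i
    d1 = a1 * o_col + b1 * o_disp + g1
    d2 = a2 * o_col + b2 * o_disp + g2
    s_off = (p_exp2 * (p_spur1 - d1) - p_exp1 * (p_spur2 - d2)) \
            / (b2 * p_exp1 - b1 * p_exp2)                       # equation 7
    s_gain = gain - p_exp1 / (p_spur1 - d1 + b1 * s_off)        # equation 8
    return s_off, s_gain
```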
[0017] Although in some embodiments the calculated offset and gain correction values sOff and sGain could be used to directly modify the gain and offset values 208, 210, at least one check of these values is for example performed to verify their validity in the presence of a temperature change, as will now be described with reference to Figure 5. Figure 5 is a flow diagram illustrating an exemplary implementation of the operation 303 of Figure 3 for validating the offset and gain correction values for one or more pixels. In an operation 501, an image is captured and corrected using the offset and gain values 208, 210, providing as inputs the captured image RAW, the offset values OFFCOL and OFFDISP, the gain values GAIN, and the terms α, β and γ used to correct the image in accordance with the aforementioned equation 1. In addition, an indication of the temperature of the focal plane is for example received. Indeed, as previously mentioned in connection with the operation 303 of Figure 3, a validation of the offset and gain correction values is for example performed at a focal plane temperature different from that at which the offset and gain correction values were calculated. Thus, the indication of the focal plane temperature is used to check whether the focal plane temperature has changed. In the example of Figure 5, the temperature indication is given by the value of the scale factor β, which varies with the temperature. The inventors have noticed that the temperature information provided by the factor β is sufficiently reliable in this context of validating the new corrections of spurious pixels. However, in alternative embodiments, a temperature value T generated by a temperature sensor could be used. For example, the pixel array includes a temperature sensor integrated into or in contact with the array to provide the temperature of the focal plane. In an operation 502, pixels are for example selected from the pixels pi of the list LSPUR for which the detection frequency FREQ exceeds a threshold level FREQMIN, and the following operations of the method are performed only on these pixels. In an operation 503, it is then determined whether gain and offset correction values exist for a first one of the pixels pi. If so, the next operation is the operation 504, while if not the next operation is the operation 505, in which the next pixel of the list is selected and the method returns to the operation 503. In other embodiments, the method of Figure 5 could be applied to all the pixels pi of the list for which offset and gain correction values have been calculated, independently of the detection frequency. The operation 502 could thus be omitted. In an operation 504, it is determined whether the value of β, which depends on the current temperature, is equal or close to one or the other of the values β1 and β2 associated with the pixels PSPUR1 and PSPUR2 stored in the list LSPUR in the operations 405 and 408, respectively, of the method of Figure 4. For example, it is determined whether the absolute difference between β and β1 is greater than a minimum threshold, and whether the absolute difference between β and β2 is greater than the minimum threshold. If either of these differences is below the threshold, the method returns for example to the operation 505. On the other hand, if there has been a significant change in temperature (change of β) since the calculation of the gain and offset correction values, the next operation is the operation 506.
As previously mentioned, rather than using the scale factor β as an indication of the temperature, a temperature value T captured by a temperature sensor could be used. In such a case, the values β, β1 and β2 will be replaced, in the operation 504 of Figure 5, by temperatures T, T1 and T2 respectively, where the values T1 and T2 are temperature values measured in relation to the preceding images and stored in the operations 405 and 408, respectively, of Figure 4.
[0018] In the operation 506, the gain and offset correction values for the pixel pi are used as a test to correct the pixel value PSPUR obtained for the image captured in the operation 501, for example by applying the aforementioned equations 1, 2 and 3, with the gain and offset values modified as in equation 5. In an operation 507, it is then determined whether the modified value of the pixel pi is still aberrant, in other words whether it is still identified as a spurious pixel. For example, the technique used in the operation 301 to detect spurious pixels is applied to the image with the corrected pixel pi. If the value is not aberrant, the correction values are considered valid, since the temperature of the focal plane has been found, in the operation 504, to be sufficiently far from its two previous values, and despite this temperature change the pixel value is not aberrant. Thus, in a subsequent operation 508, new offset and gain values corrected using the correction values sOff and sGain are for example stored in the offset and gain tables 208, 210, and in an operation 509 the pixel is removed from the list LSPUR of spurious pixels. Otherwise, if the pixel pi is still aberrant, it is for example assumed that the pixel cannot be corrected by corrections of the gain and offset values. The pixel is therefore for example added to the list L_BADPIXEL of bad pixels in an operation 510, and the operation 509 is then performed to remove the pixel from the list LSPUR. An example of a method for detecting spurious pixels, implemented for example in the operation 301 of Figure 3, will now be described. Figure 6 is a flowchart illustrating an example of operations in a method of detecting spurious pixels in a captured image. The method is for example implemented by the processing device 112 of Figure 2, and the captured image has for example been corrected by applying the offset and gain values 208, 210. In an operation 601, a score is calculated for each pixel of the input image on the basis of a distance, in terms of pixel values, calculated with respect to connected neighboring pixels. In an operation 602, outliers are for example identified by comparing the calculated scores to a first threshold. This step is for example used to select only a subset of the pixels as potential spurious pixels. In some embodiments, however, this step could be omitted. In an operation 603, spurious pixels are for example identified among the outliers identified in the operation 602 (or from the entire image in the event that the operation 602 is omitted). Figure 7 is a flowchart illustrating an example of the operations for carrying out the operation 601 of Figure 6 to produce the scores. This method is for example applied to each pixel of the image in turn, for example in raster scan order, although the pixels can be processed in any order. The operations of this method will be described with reference to Figure 8A.
[0019] Figure 8A illustrates nine views 801 to 809 of a window of 5 by 5 pixels, representing an exemplary application of the method of Figure 7. More generally, the size of the window can be defined as H by H, where H is an odd integer equal to at least 3, and for example at least 5. In some embodiments, H is equal to or less than 15. The window of H by H pixels is centered on the target pixel for which a score must be generated; in other words, the target pixel is for example the central pixel of the window. Referring again to Figure 7, in an operation 701, a list of connected neighbors of the pixel is generated. Connected neighbors are all the pixels that share a border or a corner with a pixel that has already been selected. Thus, for a pixel that is not at an edge of the image, there will be eight connected neighbors. Initially, only the pixel for which a score is to be generated is selected. This pixel will be called here the target pixel. For example, as represented by the view 801 of Figure 8A, a score is to be calculated for a central pixel, shown greyed out in the figure, having a pixel value of 120. As represented by the view 802, the connected neighbors are the eight pixels surrounding the central pixel 120. In an operation 702, among the connected neighbors, a pixel having a pixel value with the smallest distance to the target pixel value is selected.
[0020] For example, the distance d(a, b) between pixel values a and b is defined by d(a, b) = |a − b|. As shown by the view 803 in Figure 8A, a pixel having a value of 120, equal to the target pixel value, is selected. In an operation 703, the neighbor selected in the operation 702 is removed from the connected neighbor list of the target pixel, and new connected neighbors are added, which include the connected neighbors of the newly selected neighbor identified in the operation 702. For example, as shown by the view 804 in Figure 8A, three new pixels connected to the newly selected pixel are added to the list. In an operation 704, it is determined whether k connected neighbors have been selected. The number k of neighbors to be considered is for example a fixed parameter which is selected on the basis of the highest expected number of connected spurious pixels. For example, for some image sensors, it can be considered that the spurious pixels are always isolated from each other. In such a case, k can be chosen to be equal to just 2. As a variant, if it is considered possible that, for a given image sensor, two connected spurious pixels can be present, a higher value of k is for example selected, for example a value between 3 and 5. In the example of Figure 8A, k is equal to 4. If k neighbors have not yet been selected, the method returns to the operation 702, in which a new connected neighbor is again selected. The operations 703 and 704 are then repeated until k neighbors have been selected, after which an operation 705 is performed. As shown in the views 805 to 809 of Figure 8A, a block of four neighbors of the target pixel is selected. Figure 8B illustrates views of a window of H by H pixels, and demonstrates the difference between a distance calculation based simply on the nearest neighbors in the window and a calculation based on the nearest connected neighbors.
[0021] As shown by a view 810, the central pixel is aberrant, so that the difference between its value and its neighborhood is high. A view 811 represents four neighbors selected in the window that have the values closest to the central pixel value but are not connected to it. This calculation would lead to a low score, indicating that the pixel is not aberrant. A view 812 represents four selected connected neighbors. In this case, four completely different pixels are selected, and the score clearly indicates that the target pixel is aberrant. Referring again to Figure 7, in the operation 705, the score for the target pixel is calculated based on the selected connected neighbors. For example, the score si for a target pixel pi is calculated on the basis of the following equation: si = wi × Σj=1..k (pi − pj) (Equation 9), where wi is a weight associated with the pixel, and p1 to pk are the k selected connected neighbors.
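As an illustration of operations 701 to 705 and of equation 9, a sketch of the score calculation for one target pixel could be the following; the weight wi is passed as a parameter since its expression is only discussed in the following paragraph, and the handling of equal distances and of image borders is an assumption made for the example.

```python
def pixel_score(img, x, y, k, H, weight):
    """Score of the target pixel (x, y): the weight times the sum of the
    differences to its k connected neighbors, selected as in operations
    701 to 705 (equation 9)."""
    height, width = len(img), len(img[0])
    r = H // 2
    target = img[y][x]

    def in_window(px, py):
        return (abs(px - x) <= r and abs(py - y) <= r
                and 0 <= px < width and 0 <= py < height)

    selected = {(x, y)}
    candidates = set()

    def add_connected_neighbors(px, py):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                q = (px + dx, py + dy)
                if q not in selected and in_window(*q):
                    candidates.add(q)

    add_connected_neighbors(x, y)                 # operation 701
    chosen = []
    for _ in range(k):                            # operation 704: loop until k neighbors
        # operation 702: candidate whose value is closest to the target value
        best = min(candidates, key=lambda q: abs(img[q[1]][q[0]] - target))
        candidates.discard(best)                  # operation 703 ...
        selected.add(best)
        add_connected_neighbors(*best)            # ... and add its connected neighbors
        chosen.append(best)

    return weight * sum(target - img[qy][qx] for qx, qy in chosen)   # equation 9
```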
[0022] The weight wi for a pixel pi is, for example, determined using the following equation: ## EQU1 ##, where stdloc is an array of local standard deviations calculated for the pixels located in the H by H window of the pixel pi, sorted in ascending order, and ε is a parameter, for example set to a very small value such as 0.0001. The weight is thus based on the standard deviations of a sub-range of pixels in the H by H window, the sub-range being selected as the pixels ranked between H and (H² − H) on the basis of the sorted standard deviations. In alternative embodiments, the weight could be calculated based on the standard deviation of a different subset of the pixels. In further alternative embodiments, a different weight could be applied to the scores, or no weight could be applied to the scores. An advantage of applying a weight based on the local standard deviation of the pixel is that the texture of the pixel area can be taken into account, a higher weight being given to the scores of pixels in smooth areas, and a lower weight being given to the scores of pixels in textured areas, where a relatively large deviation can be expected. Figure 8C illustrates views 813, 814 of two windows of H by H pixels, and demonstrates the advantage of applying the weight based on the local standard deviation of the pixels. Without the weight, the score of the target pixel in both views would be the same. However, in the view 813, an edge passes through the target pixel, and the pixel should therefore not be considered aberrant. In the view 814, the image is relatively smooth in the region of the target pixel, and the target pixel should be considered aberrant. The weight wi calculated for the view 814 on the basis of the local standard deviation will be greater than the weight calculated for the view 813. Referring again to the method of Figure 6, the operation 602 involves for example determining a threshold score based on a probability distribution of the expected scores in the image. The present inventors have found that the Laplace distribution is particularly well suited to most infrared image scenes. It is known that if S ~ Laplace(0, σ), then |S| ~ Exp(σ⁻¹) follows an exponential distribution. The probability density function of |S| ~ Exp(λ), with λ = σ⁻¹, is therefore of the form f(x) = λ·e^(−λx), with λ > 0. Its distribution function is F(x) = 1 − e^(−λx). The parameter λ of the exponential can be estimated by estimating the mean, based on the sample mean of the absolute scores, and taking the inverse of this mean: λ = n / (Σi |si|) (Equation 11), where n is the number of pixels in the image. By calling the threshold throutlier, this threshold is for example calculated on the basis of λ by using the following equation: throutlier = ln(4)/λ + 1.5 × ln(3)/λ (Equation 12). Rather than calculating the threshold using this equation, an alternative would be simply to choose a threshold that filters out a certain percentage of the scores, such as 95% of the scores. However, an advantage of filtering using the threshold as described above, based on the Laplace distribution, is that it avoids problems introduced by noise. Indeed, if a fixed percentage of scores is selected, the number of selected pixels will be the same for the same image with or without noise, whereas the threshold determined based on the Laplace distribution will vary depending on the noise level in the image. The operation 603 of Figure 6 involves for example the identification of spurious pixels among the aberrant pixels identified in the operation 602.
This is for example achieved by selecting the scores above a threshold level calculated on the basis of the scores of the aberrant pixels. The threshold thrSPUR is for example determined using the following equation: thrSPUR = Q3 + xEI × (Q3 − Q1) (Equation 13), where xEI is a parameter chosen for example between 1.0 and 5.0, and for example equal to 1.5, and Q1 and Q3 are respectively the first and third quartiles of the scores of the aberrant pixels identified in the operation 602. In some embodiments, to avoid false alarms, a pixel is considered to be a spurious pixel only if its score exceeds the threshold thrSPUR and its score is greater than a minimum threshold thrscoremin equal to a fixed value.
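As an illustration of operations 602 and 603, the two thresholds can be derived from the map of scores as follows; the use of NumPy percentiles for the quartiles Q1 and Q3, the default values of xEI and thrscoremin, and the absence of a guard for the case where no score exceeds throutlier are assumptions made for the example.

```python
import numpy as np

def detect_spurious(scores, x_ei=1.5, thr_score_min=0.0):
    """Identify spurious pixels from the 2D map of scores of equation 9,
    using the outlier threshold of equation 12 and the spurious-pixel
    threshold of equation 13."""
    abs_scores = np.abs(scores)
    lam = abs_scores.size / abs_scores.sum()                 # equation 11
    thr_outlier = np.log(4) / lam + 1.5 * np.log(3) / lam    # equation 12

    outliers = abs_scores[abs_scores > thr_outlier]          # operation 602
    q1, q3 = np.percentile(outliers, [25, 75])
    thr_spur = q3 + x_ei * (q3 - q1)                         # equation 13

    mask = abs_scores > max(thr_spur, thr_score_min)         # operation 603
    return np.argwhere(mask)                                 # (row, column) coordinates
```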
[0023] An advantage of the embodiments described herein is that spurious pixels can be detected using a relatively simple and efficient method. In addition, the spurious pixel correction method means that, rather than being classified as bad pixels, the pixel values from certain pixels, which carry scene information, may continue to be used to generate pixels of the image. Having thus described at least one illustrative embodiment, various alterations, modifications and improvements will be apparent to those skilled in the art. For example, although a specific example of a micro-bolometer has been described in connection with Figures 1 and 2, it will be apparent to those skilled in the art that the methods described herein could be applied to various other micro-bolometer implementations, or to other types of infrared image capture devices. In addition, it will be clear to those skilled in the art that the various operations described in connection with the various embodiments could be carried out, in alternative embodiments, in different orders without impacting their efficiency.
Claims:
Claims (16)
[0001]
1. A method for correcting spurious pixels of a pixel array (102) of an infrared-sensitive image capture device, the method comprising: receiving, by a processing device (202) of the image capture device, a first input image (RAW) captured by the pixel array (102), and correcting the first input image by applying gain and offset values to pixel values in the first input image; detecting in the first corrected input image at least one spurious pixel, and adding said at least one spurious pixel to a spurious pixel list (LSPUR); receiving, by the processing device (202), a second input image (RAW) captured by the pixel array (102) and correcting the second input image by applying the gain and offset values to pixel values in the second input image; and calculating gain and offset correction values (sOff, sGain) for said at least one spurious pixel based on the corrected first and second input images.
[0002]
The method of claim 1, further comprising validating the gain and offset correction values (sOff, sGain) by applying them to correct the values of said at least one spurious pixel in a third image captured by the pixel array (102) and detecting whether said at least one spurious pixel is still detected as a spurious pixel in the third image.
[0003]
The method of claim 2, wherein the third input image is captured at a different pixel array temperature than that of each of the first and second input images.
[0004]
The method of claim 2 or 3, further comprising, prior to calculating the gain and offset correction values, adding said at least one detected spurious pixel to a list of bad pixels, and removing said at least one detected spurious pixel from the bad pixel list if the gain and offset correction values are validated during the validation step.
[0005]
A method according to any one of claims 1 to 4, wherein the pixel array comprises columns of pixels, each column being associated with a corresponding reference pixel, and wherein the correction of the first and second input images comprises: determining, on the basis of the input image and of a column component vector (VCOL) representing a column deviation introduced by the reference pixels of the pixel array, a first scale factor (α) by estimating a level of the column deviation present in the input image; generating column offset values (α·VCOL(x)) based on the product of the first scale factor by the values of the column component vector (VCOL); determining, on the basis of the input image and of a 2D dispersion matrix (OFFDISP) representing the 2D dispersion introduced by the pixel array, a second scale factor (β) by estimating a level of the 2D dispersion present in the input image; generating pixel offset values (β·OFFDISP(x, y)) based on the product of the second scale factor by the values of the 2D dispersion matrix (OFFDISP); and generating the corrected image (CORR) by applying the column and pixel offset values.
[0006]
The method of any one of claims 1 to 5, wherein the corrected image (CORR) is generated on the basis of the equation: CORR(x, y) = GAIN(x, y) × (RAW(x, y) − α·OFFCOL(x, y) − β·OFFDISP(x, y) − γ), where RAW is the input image, α and β are scale factors, γ is a gain correction value, GAIN(x, y) is a gain value, OFFCOL(x, y) and OFFDISP(x, y) are offset values, OFFCOL being a matrix comprising, in each of its rows, the column vector VCOL, and OFFDISP being the reference dispersion matrix.
[0007]
The method of any one of claims 1 to 6, wherein calculating the gain and offset correction values (sOff, sGain) for said at least one spurious pixel based on the corrected first and second input images comprises: estimating, based on neighboring pixels in the first input image, a first expected pixel value (PEXP1) of each of said at least one spurious pixel; estimating, based on neighboring pixels in the second input image, a second expected pixel value (PEXP2) of each of said at least one spurious pixel; and calculating the gain and offset correction values (sOff, sGain) based on the estimated first and second expected pixel values.
[0008]
The method of any one of claims 1 to 7, wherein detecting said at least one spurious pixel comprises: computing a score for a plurality of target pixels including at least some of the pixels of the first input image, the score for each target pixel being generated on the basis of k connected neighboring pixels of the input image in a window of H by H pixels around the target pixel, H being an odd integer equal to or greater than 3, and k being an integer between 2 and 5, each of the connected neighboring pixels sharing a border or corner with at least one of the connected neighboring pixels and / or with the target pixel and at least one of the connected neighboring pixels sharing a border or a corner with the target pixel; and detecting that at least one of the pixels is a spurious pixel based on the calculated scores.
[0009]
The method of claim 8, wherein detecting said at least one spurious pixel comprises comparing at least some of the scores to a threshold value (thrSPUR).
[0010]
The method of claim 9, wherein comparing at least some of the scores to a threshold value involves comparing a subset of the scores to the threshold value, the subset including a plurality of the highest scores, and wherein the threshold value is calculated on the basis of the following equation: thrSPUR = Q3 + xEI × (Q3 − Q1), where xEI is a parameter equal to at least 1.0 and Q1 and Q3 are respectively the first and third quartiles of the subset.
[0011]
The method of claim 9 or 10, wherein said at least some scores are selected by applying another threshold (throutlier) to the calculated scores.
[0012]
The method of claim 11, wherein the other threshold is calculated based on an assumption that the pixel values in the image have a probability distribution based on the Laplace distribution. 20
[0013]
The method of claim 12, wherein the other threshold is calculated on the basis of the following equation: throutlier = ln(4)/λ + 1.5 × ln(3)/λ, where λ is an estimate of the parameter of the exponential distribution f(x) = λ·e^(−λx) corresponding to the absolute values (|S|) of the calculated scores.
[0014]
A computer readable storage medium storing instructions for carrying out the method of any one of claims 1 to 13 when performed by a processing device. 30
[0015]
An image processing apparatus comprising: a memory (206) storing offset and gain values (208, 201) and a list (LSPUR) of spurious pixels (212); a processing device (302) adapted to: throutlier receiving a first input image (RAW) captured by a pixel array (102) of an infrared-sensitive image pickup device, and correcting the first one of iIge input by applying the gain and offset values to pixel values in the first input image; detecting in the first corrected input image (RAW) at least one spurious pixel, and adding said at least one spurious pixel to the spurious pixel list (LSPUR); receiving a second input image (RAW) captured by the pixel array (102) and correcting the second input image (RAW) by applying the gain and offset values to pixel values in the second image of Entrance ; and calculating gain and offset correction values (, sOff, sGain) for said at least one spurious pixel based on the corrected first and second input images.
[0016]
The processing device according to claim 15, wherein the processing device is further adapted to validate the gain and offset correction values (sOff, sGain) by applying them to correct the values of said at least one spurious pixel in a third input image captured by the pixel array (102) and to detect whether said at least one spurious pixel is still detected as a spurious pixel in the third image.
Similar technologies:
Publication number | Publication date | Patent title
EP2940991B1|2017-09-27|Method for processing an infrared image for correction of non-uniformities
EP3314888B1|2019-02-06|Correction of bad pixels in an infrared image-capturing apparatus
EP2457379B1|2020-01-15|Method for estimating a defect in an image capture system and associated systems
EP2833622B1|2015-11-04|Diagnosis of the faulty state of a bolometric detection matrix
EP3314887B1|2020-08-19|Detection of bad pixels in an infrared image-capturing apparatus
EP3216213A1|2017-09-13|Method for detecting defective pixels
EP3170205B1|2018-12-12|Device for detecting movement
EP1368965B1|2012-08-08|Method and device for electronic image sensing in several zones
JP6036998B2|2016-11-30|Imaging apparatus, image correction method, and image correction program
CA3137232A1|2020-11-05|Method and device for removing remanence in an infrared image of a changing scene
WO2020221774A1|2020-11-05|Method and device for removing remanence in an infrared image of a static scene
FR3082346A1|2019-12-13|DEVICE AND METHOD FOR COMPENSATION OF INTERFERENCE OF HEAT IN AN INFRARED CAMERA
FR3107116A1|2021-08-13|Method of calibrating an optoelectronic device
Patent family:
Publication number | Publication date
CA2990168A1|2016-12-29|
CN107810630A|2018-03-16|
LT3314888T|2019-06-25|
JP6682562B2|2020-04-15|
FR3038194B1|2017-08-11|
US10609313B2|2020-03-31|
JP2018525878A|2018-09-06|
RU2018100398A3|2020-01-20|
CN107810630B|2020-07-24|
KR20180032552A|2018-03-30|
RU2717346C2|2020-03-23|
WO2016207506A1|2016-12-29|
EP3314888A1|2018-05-02|
US20180184028A1|2018-06-28|
EP3314888B1|2019-02-06|
RU2018100398A|2019-07-26|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
EP0651566A1|1993-10-29|1995-05-03|International Business Machines Corporation|Programmable on-focal plane signal processor|
US20040239782A1|2003-05-30|2004-12-02|William Equitz|System and method for efficient improvement of image quality in cameras|
US20110057802A1|2009-09-08|2011-03-10|Karin Topfer|Image quality monitor for digital radiography system|
US20140016005A1|2012-07-13|2014-01-16|Sony Corporation|Information processing apparatus, information processing method, and information processing program|
GB1101597A|1964-07-11|1968-01-31|Elcon Ag|Improvements in and relating to prefabricated buildings|
US5532484A|1994-09-09|1996-07-02|Texas Instruments Incorporated|Defective pixel signal substitution in thermal imaging systems|
US5925880A|1996-08-30|1999-07-20|Raytheon Company|Non uniformity compensation for infrared detector arrays|
US7283164B2|2002-09-18|2007-10-16|Micron Technology, Inc.|Method for detecting and correcting defective pixels in a digital image sensor|
JP4479373B2|2004-06-28|2010-06-09|ソニー株式会社|Image sensor|
US7880777B2|2005-05-26|2011-02-01|Fluke Corporation|Method for fixed pattern noise reduction in infrared imaging cameras|
US7634151B2|2005-06-23|2009-12-15|Hewlett-Packard Development Company, L.P.|Imaging systems, articles of manufacture, and imaging methods|
US8063957B2|2006-03-24|2011-11-22|Qualcomm Incorporated|Method and apparatus for processing bad pixels|
GB0625936D0|2006-12-28|2007-02-07|Thermoteknix Systems Ltd|Correction of non-uniformity of response in sensor arrays|
EP2132726A4|2007-01-16|2011-01-12|Bae Systems Information|Real-time pixel substitution for thermal imaging systems|
FR2918450B1|2007-07-02|2010-05-21|Ulis|DEVICE FOR DETECTING INFRARED RADIATION WITH BOLOMETRIC DETECTORS|
RU2349053C1|2007-07-30|2009-03-10|Федеральное государственное унитарное предприятие "НПО "ОРИОН"|Method of correction of heterogeneity of matrix photointakes with microscanning|
JP2009188822A|2008-02-07|2009-08-20|Olympus Corp|Image processor and image processing program|
US20100141810A1|2008-12-04|2010-06-10|Proimage Technology|Bad Pixel Detection and Correction|
CN105191288B|2012-12-31|2018-10-16|菲力尔系统公司|Abnormal pixel detects|
US8259198B2|2009-10-20|2012-09-04|Apple Inc.|System and method for detecting and correcting defective pixels in an image sensor|
US8493482B2|2010-08-18|2013-07-23|Apple Inc.|Dual image sensor image processing system and method|
US8203116B2|2010-10-19|2012-06-19|Raytheon Company|Scene based non-uniformity correction for infrared detector arrays|
US9064308B2|2011-04-13|2015-06-23|Raytheon Company|System and method for residual analysis of images|
KR20120114021A|2011-04-06|2012-10-16|삼성디스플레이 주식회사|Method for correcting defect pixels|
CA2838992C|2011-06-10|2018-05-01|Flir Systems, Inc.|Non-uniformity correction techniques for infrared imaging devices|
US9743057B2|2012-05-31|2017-08-22|Apple Inc.|Systems and methods for lens shading correction|
FR3009388B1|2013-07-30|2015-07-17|Ulis|DIAGNOSIS OF THE DEFECTIVE STATE OF A BOLOMETRIC DETECTION MATRIX|
JP6092075B2|2013-11-14|2017-03-08|住友重機械工業株式会社|Injection molding machine|
US20150172576A1|2013-12-05|2015-06-18|Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi|Real time dynamic bad pixel restoration method|
FR3038195B1|2015-06-26|2018-08-31|Ulis|DETECTION OF PIXEL PARASITES IN AN INFRARED IMAGE SENSOR|
JP2018113614A|2017-01-12|2018-07-19|ソニーセミコンダクタソリューションズ株式会社|Imaging device and imaging method, electronic apparatus, and signal processing device|
KR101918761B1|2018-04-27|2018-11-15|이오시스템|Method and apparatus for processing defect pixel in infrared thermal detector|
Legal status:
2016-06-17| PLFP| Fee payment|Year of fee payment: 2 |
2016-12-30| PLSC| Search report ready|Effective date: 20161230 |
2017-06-16| PLFP| Fee payment|Year of fee payment: 3 |
2018-06-14| PLFP| Fee payment|Year of fee payment: 4 |
2019-06-21| PLFP| Fee payment|Year of fee payment: 5 |
2021-03-12| ST| Notification of lapse|Effective date: 20210205 |
Priority applications:
Application number | Filing date | Patent title
FR1555963A|FR3038194B1|2015-06-26|2015-06-26|CORRECTION OF PIXEL PARASITES IN AN INFRARED IMAGE SENSOR|FR1555963A| FR3038194B1|2015-06-26|2015-06-26|CORRECTION OF PIXEL PARASITES IN AN INFRARED IMAGE SENSOR|
US15/739,556| US10609313B2|2015-06-26|2016-06-10|Correction of bad pixels in an infrared image-capturing apparatus|
RU2018100398A| RU2717346C2|2015-06-26|2016-06-10|Correction of bad pixels in infrared image capturing device|
CA2990168A| CA2990168A1|2015-06-26|2016-06-10|Correction of bad pixels in an infrared image-capturing apparatus|
JP2017567334A| JP6682562B2|2015-06-26|2016-06-10|False pixel correction in an infrared image capture device|
EP16741067.9A| EP3314888B1|2015-06-26|2016-06-10|Correction of bad pixels in an infrared image-capturing apparatus|
KR1020187000759A| KR20180032552A|2015-06-26|2016-06-10|Correction of parasitic pixels of an infrared image capturing apparatus|
CN201680037758.XA| CN107810630B|2015-06-26|2016-06-10|Method for correcting spurious pixels, computer-readable storage medium, and image processing apparatus|
PCT/FR2016/051393| WO2016207506A1|2015-06-26|2016-06-10|Correction of bad pixels in an infrared image-capturing apparatus|
LTEP16741067.9T| LT3314888T|2015-06-26|2016-06-10|Correction of bad pixels in an infrared image-capturing apparatus|